
    Estimating Vehicle Miles Traveled on Local Roads

    This research presents a new method to estimate local road vehicle miles traveled (VMT) using the concept of betweenness centrality. Betweenness centrality measures the centrality of a node or link in a network and has been widely applied in the social sciences; here we relate it to traffic volumes. We demonstrate that VMT on local roads exhibits a scale-free property: it follows a piecewise (double) power law distribution. In other words, the total local VMT can be obtained by properly connecting the two distributions at a breakpoint, each with its own power-law slope. We show that the breakpoint can be predicted from certain network topological measures, which suggests that the breakpoint may be an inherent property of a particular network. We also show that the point of highest betweenness centrality can be estimated using network measures. Furthermore, we prove that the estimated VMT is not sensitive to the exponents of the power law distributions. This research highlights a potentially new direction for local road VMT estimation.
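    The betweenness centrality the abstract builds on counts, for each link or node, the fraction of shortest paths between all pairs that pass through it. A minimal sketch of the standard computation (Brandes' accumulation) on an unweighted, undirected graph — illustrative only, not the paper's VMT estimator:

    ```python
    from collections import deque

    def betweenness(adj):
        """Node betweenness centrality for an unweighted, undirected graph.
        adj: dict mapping each node to a list of its neighbors."""
        bc = {v: 0.0 for v in adj}
        for s in adj:
            # BFS from s, recording shortest-path counts and predecessors.
            stack, preds = [], {v: [] for v in adj}
            sigma = {v: 0 for v in adj}; sigma[s] = 1
            dist = {v: -1 for v in adj}; dist[s] = 0
            q = deque([s])
            while q:
                v = q.popleft()
                stack.append(v)
                for w in adj[v]:
                    if dist[w] < 0:
                        dist[w] = dist[v] + 1
                        q.append(w)
                    if dist[w] == dist[v] + 1:
                        sigma[w] += sigma[v]
                        preds[w].append(v)
            # Back-propagate pair dependencies in reverse BFS order.
            delta = {v: 0.0 for v in adj}
            while stack:
                w = stack.pop()
                for v in preds[w]:
                    delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
                if w != s:
                    bc[w] += delta[w]
        # Undirected graph: every pair is counted from both endpoints.
        return {v: c / 2 for v, c in bc.items()}
    ```

    On a three-node path graph the middle node scores 1.0 (it lies on the single shortest path between the endpoints) and the endpoints score 0, which matches the intuition that through-traffic concentrates on central links.
    
    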

    Bayesian imaging inverse problem with SA-Roundtrip prior via HMC-pCN sampler

    Bayesian inference with deep generative priors has received considerable interest for solving imaging inverse problems in many scientific and engineering fields. The prior distribution is learned from available prior measurements, making its selection an important representation-learning task. The SA-Roundtrip, a novel deep generative prior, is introduced to enable controlled sampling generation and to identify the data's intrinsic dimension. This prior incorporates a self-attention structure within a bidirectional generative adversarial network. Bayesian inference is then carried out on the posterior distribution in the low-dimensional latent space using the Hamiltonian Monte Carlo with preconditioned Crank-Nicolson (HMC-pCN) algorithm, which is proven to be ergodic under specific conditions. Experiments on computed tomography (CT) reconstruction with the MNIST and TomoPhantom datasets show that the proposed method outperforms state-of-the-art methods, consistently yielding a robust and superior point estimator along with precise uncertainty quantification.
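    The pCN half of the sampler is the key to dimension-robust latent-space inference: the proposal is reversible with respect to the Gaussian prior, so the accept/reject step involves only the negative log-likelihood Φ. A minimal 1-D plain pCN sketch (without the Hamiltonian part, and not the paper's implementation) under a standard-normal prior:

    ```python
    import math
    import random

    def pcn_sample(phi, n_steps=20000, beta=0.3, seed=1):
        """Plain pCN sampler targeting exp(-phi(u)) * N(u; 0, 1) in 1D.
        phi: negative log-likelihood; beta: proposal step size in (0, 1)."""
        rng = random.Random(seed)
        u, samples = 0.0, []
        for _ in range(n_steps):
            # pCN proposal: prior-preserving autoregressive move.
            v = math.sqrt(1.0 - beta ** 2) * u + beta * rng.gauss(0.0, 1.0)
            # Acceptance ratio depends on the likelihood term only.
            if math.log(rng.random() + 1e-300) < phi(u) - phi(v):
                u = v
            samples.append(u)
        return samples
    ```

    For example, with a Gaussian likelihood centered at 1 (phi(u) = 0.5·(u−1)²) and the N(0, 1) prior, the posterior is N(0.5, 0.5), and the chain's sample mean settles near 0.5.
    
    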

    Deep Unrolling Networks with Recurrent Momentum Acceleration for Nonlinear Inverse Problems

    Combining the strengths of model-based iterative algorithms and data-driven deep learning solutions, deep unrolling networks (DuNets) have become a popular tool for solving inverse imaging problems. While DuNets have been successfully applied to many linear inverse problems, nonlinear problems tend to impair the method's performance. Inspired by momentum acceleration techniques often used in optimization algorithms, we propose a recurrent momentum acceleration (RMA) framework that uses a long short-term memory recurrent neural network (LSTM-RNN) to simulate the momentum acceleration process. The RMA module leverages the ability of the LSTM-RNN to learn and retain knowledge from previous gradients. We apply RMA to two popular DuNets -- the learned proximal gradient descent (LPGD) and the learned primal-dual (LPD) methods, resulting in LPGD-RMA and LPD-RMA respectively. We provide experimental results on two nonlinear inverse problems: a nonlinear deconvolution problem, and an electrical impedance tomography problem with limited boundary measurements. In the first experiment, we observed that the improvement due to RMA grows with the nonlinearity of the problem. The results of the second example further demonstrate that the RMA schemes can significantly improve the performance of DuNets on strongly ill-posed problems.
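    The unrolled-with-momentum structure can be sketched generically: each iteration computes a data-fidelity gradient, feeds it through a recurrent module that turns the gradient history into a momentum term, and applies a (learned) proximal step. All names below are illustrative placeholders, not the authors' implementation; the `rnn_step` stands in for the LSTM:

    ```python
    def unrolled_lpgd_rma(x0, grad, prox, rnn_step, h0, c0,
                          n_iters=10, step=0.1):
        """Schematic LPGD iteration with a recurrent momentum (RMA) module.
        grad(x): data-fidelity gradient; prox(x): proximal map (learned in
        the real network); rnn_step(g, h, c) -> (m, h, c): recurrent cell
        that maps the gradient history to a momentum term m."""
        x, h, c = x0, h0, c0
        for _ in range(n_iters):
            g = grad(x)
            m, h, c = rnn_step(g, h, c)    # momentum from gradient history
            x = prox(x - step * (g + m))   # proximal gradient step
        return x
    ```

    As a toy check, replacing the LSTM with a fixed exponential moving average of gradients (a hand-coded heavy-ball analogue) already drives a quadratic problem to its minimizer, which is the behavior the learned module generalizes.
    
    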

    Combination Therapies of Hypomethylating Agents for Elderly Patients with Acute Myeloid Leukemia

    Older patients with acute myeloid leukemia (AML) face poor long-term outcomes due to both patient and disease characteristics. Hypomethylating agents (HMAs), which act as DNA methyltransferase (DNMT) inhibitors, have been established as a new treatment option, but they have been associated with relatively low response rates (15%–20% complete remission) when administered as monotherapy to elderly patients with AML. However, combination therapies built around decitabine or azacitidine have flourished. The results of randomized trials of various combinations of HMAs with chemotherapy, histone deacetylase inhibitors, monoclonal antibodies, immunomodulatory agents, kinase inhibitors, or bexarotene are summarized.

    DualToken-ViT: Position-aware Efficient Vision Transformer with Dual Token Fusion

    Self-attention-based vision transformers (ViTs) have emerged as a highly competitive architecture in computer vision. Unlike convolutional neural networks (CNNs), ViTs are capable of global information sharing, and as ViT architectures have diversified, they have become increasingly advantageous for many vision tasks. However, the quadratic complexity of self-attention renders ViTs computationally intensive, and their lack of the inductive biases of locality and translation equivariance demands larger model sizes than CNNs to effectively learn visual features. In this paper, we propose a lightweight and efficient vision transformer model called DualToken-ViT that leverages the advantages of both CNNs and ViTs. DualToken-ViT fuses tokens carrying local information, obtained by a convolution-based structure, with tokens carrying global information, obtained by a self-attention-based structure, to achieve an efficient attention mechanism. In addition, we use position-aware global tokens throughout all stages to enrich the global information, further strengthening DualToken-ViT; these tokens also carry positional information about the image, which benefits vision tasks. We conducted extensive experiments on image classification, object detection, and semantic segmentation to demonstrate the effectiveness of DualToken-ViT. On the ImageNet-1K dataset, our models of different scales achieve accuracies of 75.4% and 79.4% with only 0.5G and 1.0G FLOPs, respectively, and our model with 1.0G FLOPs outperforms LightViT-T, which also uses global tokens, by 0.7%.
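    The quadratic cost the abstract refers to comes from the all-pairs token mixing in scaled dot-product attention: every query attends to every key. A self-contained sketch of that global mixing step (plain Python lists standing in for tensors; illustrative only, not the DualToken-ViT fusion itself):

    ```python
    import math

    def attention(Q, K, V):
        """Scaled dot-product attention over lists of token vectors.
        For n tokens this loops over all n*n query-key pairs, which is
        the O(n^2) cost that motivates efficient designs like fusing a
        cheap local (convolutional) path with a compact global path."""
        d = len(Q[0])
        out = []
        for q in Q:
            # Similarity of this query to every key, scaled by sqrt(d).
            scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                      for k in K]
            # Numerically stable softmax over the scores.
            m = max(scores)
            w = [math.exp(s - m) for s in scores]
            z = sum(w)
            w = [x / z for x in w]
            # Output token: attention-weighted average of the values.
            out.append([sum(wi * v[j] for wi, v in zip(w, V))
                        for j in range(len(V[0]))])
        return out
    ```

    With a single key, the softmax weight is 1 and the output reproduces the value vector; with many tokens the n² score loop is exactly the term that a dual local/global token design seeks to shrink.
    
    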